Cognitive Architectures


Robot Talk Episode 144 – Robot trust in humans, with Samuele Vinanzi

Robohub

Claire chatted to Samuele Vinanzi from Sheffield Hallam University about how robots can tell whether to trust or distrust people. Samuele Vinanzi is a Senior Lecturer in Robotics and Artificial Intelligence at Sheffield Hallam University. He specializes in Cognitive Robotics: an interdisciplinary field that integrates robotics, artificial intelligence, cognitive science, and psychology to create robots that perceive, reason, and interact like humans. His research focuses on enabling social collaboration between humans and robots, particularly emotional intelligence, intention reading, and artificial trust. His recent book, "In Robots We Trust", explores trust relationships between humans and robots.


Executable Epistemology: The Structured Cognitive Loop as an Architecture of Intentional Understanding

Kim, Myung Ho

arXiv.org Artificial Intelligence

Large language models exhibit intelligence without genuine epistemic understanding, exposing a key gap: the absence of epistemic architecture. This paper introduces the Structured Cognitive Loop (SCL) as an executable epistemological framework for emergent intelligence. Unlike traditional AI research asking "what is intelligence?" (ontological), SCL asks "under what conditions does cognition emerge?" (epistemological). Grounded in philosophy of mind and cognitive phenomenology, SCL bridges conceptual philosophy and implementable cognition. Drawing on process philosophy, enactive cognition, and extended mind theory, we define intelligence not as a property but as a performed process -- a continuous loop of judgment, memory, control, action, and regulation. SCL makes three contributions. First, it operationalizes philosophical insights into computationally interpretable structures, enabling "executable epistemology" -- philosophy as structural experiment. Second, it shows that functional separation within cognitive architecture yields more coherent and interpretable behavior than monolithic prompt-based systems, supported by agent evaluations. Third, it redefines intelligence: not representational accuracy but the capacity to reconstruct its own epistemic state through intentional understanding. This framework impacts philosophy of mind, epistemology, and AI. For philosophy, it allows theories of cognition to be enacted and tested. For AI, it grounds behavior in epistemic structure rather than statistical regularity. For epistemology, it frames knowledge not as truth possession but as continuous reconstruction within a phenomenologically coherent loop. We situate SCL within debates on cognitive phenomenology, emergence, normativity, and intentionality, arguing that real progress requires not larger models but architectures that realize cognitive principles structurally.
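The loop the abstract describes (judgment, memory, control, action, regulation as functionally separated stages) can be sketched minimally; the class and method names below mirror the abstract's terminology, but the internals are invented for illustration and are not the paper's implementation:

```python
class StructuredCognitiveLoop:
    """Toy sketch of a cognitive loop with functionally separated stages."""

    def __init__(self):
        self.memory: list[str] = []  # epistemic state carried across cycles

    def judge(self, observation: str) -> str:
        # Form a judgment about the current situation
        return f"judged:{observation}"

    def control(self, judgment: str) -> str:
        # Select a course of action based on the judgment
        return f"act-on:{judgment}"

    def act(self, plan: str) -> str:
        # Execute the selected plan
        return f"done:{plan}"

    def regulate(self, outcome: str) -> None:
        # Reconstruct epistemic state: fold the outcome back into memory
        self.memory.append(outcome)

    def step(self, observation: str) -> str:
        # One full pass: judgment -> control -> action -> regulation
        judgment = self.judge(observation)
        outcome = self.act(self.control(judgment))
        self.regulate(outcome)
        return outcome

loop = StructuredCognitiveLoop()
print(loop.step("door is open"))  # → done:act-on:judged:door is open
```

The point of the sketch is only the structure: each stage is a separate, inspectable function rather than one monolithic prompt, which is the kind of functional separation the abstract credits for more interpretable behavior.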


Beyond the Black Box: A Cognitive Architecture for Explainable and Aligned AI

Keyi, Hu

arXiv.org Artificial Intelligence

Current AI paradigms, as "architects of experience," face fundamental challenges in explainability and value alignment. This paper introduces "Weight-Calculatism," a novel cognitive architecture grounded in first principles, and demonstrates its potential as a viable pathway toward Artificial General Intelligence (AGI). The architecture deconstructs cognition into indivisible Logical Atoms and two fundamental operations: Pointing and Comparison. Decision-making is formalized through an interpretable Weight-Calculation model (Weight = Benefit * Probability), where all values are traceable to an auditable set of Initial Weights. This atomic decomposition enables radical explainability, intrinsic generality for novel situations, and traceable value alignment. We detail its implementation via a graph-algorithm-based computational engine and a global workspace workflow, supported by a preliminary code implementation and scenario validation. Results indicate that the architecture achieves transparent, human-like reasoning and robust learning in unprecedented scenarios, establishing a practical and theoretical foundation for building trustworthy and aligned AGI.
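The decision rule stated in the abstract (Weight = Benefit * Probability) is simple enough to sketch directly; the `Option` type, field names, and example values below are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A candidate action with an estimated benefit and success probability."""
    name: str
    benefit: float      # in the paper, traceable to auditable Initial Weights
    probability: float  # estimated likelihood the benefit is realized

def weight(option: Option) -> float:
    # Weight = Benefit * Probability, as stated in the abstract
    return option.benefit * option.probability

def choose(options: list[Option]) -> Option:
    # Decision-making selects the option with the highest weight
    return max(options, key=weight)

options = [Option("ask for help", 4.0, 0.9), Option("act alone", 10.0, 0.3)]
print(choose(options).name)  # → ask for help (4.0*0.9 = 3.6 > 10.0*0.3 = 3.0)
```

Because every decision reduces to an explicit product over named quantities, each choice can be audited by inspecting the inputs to `weight` — which is the traceability property the abstract emphasizes.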


A Modular Cognitive Architecture for Assisted Reasoning: The Nemosine Framework

Melo, Edervaldo

arXiv.org Artificial Intelligence

This paper presents the Nemosine Framework, a modular cognitive architecture designed to support assisted reasoning, structured thinking, and systematic analysis. The model operates through functional cognitive modules ("personas") that organize tasks such as planning, evaluation, cross-checking, and narrative synthesis. The framework combines principles from metacognition, distributed cognition, and modular cognitive systems to offer an operational structure for assisted problem-solving and decision support. The architecture is documented through formal specification, internal consistency criteria, and reproducible structural components. The goal is to provide a clear conceptual basis for future computational implementations and to contribute to the study of symbolic-modular architectures for reasoning.


Bridging the Gap: Toward Cognitive Autonomy in Artificial Intelligence

Golilarz, Noorbakhsh Amiri, Penchala, Sindhuja, Rahimi, Shahram

arXiv.org Artificial Intelligence

Artificial intelligence has advanced rapidly across perception, language, reasoning, and multimodal domains. Yet despite these achievements, modern AI systems remain fundamentally limited in their ability to self-monitor, self-correct, and regulate their behavior autonomously in dynamic contexts. This paper identifies and analyzes seven core deficiencies that constrain contemporary AI models: the absence of intrinsic self-monitoring, lack of meta-cognitive awareness, fixed and non-adaptive learning mechanisms, inability to restructure goals, lack of representational maintenance, insufficient embodied feedback, and the absence of intrinsic agency. Alongside identifying these limitations, we also outline a forward-looking perspective on how AI may evolve beyond them through architectures that mirror neurocognitive principles. We argue that these structural limitations prevent current architectures, including deep learning and transformer-based systems, from achieving robust generalization, lifelong adaptability, and real-world autonomy. Drawing on a comparative analysis of artificial systems and biological cognition [7], and integrating insights from AI research, cognitive science, and neuroscience, we outline how these capabilities are absent in current models and why scaling alone cannot resolve them. We conclude by advocating for a paradigmatic shift toward cognitively grounded AI (cognitive autonomy) capable of self-directed adaptation, dynamic representation management, and intentional, goal-oriented behavior, paired with reformative oversight mechanisms [8] that ensure autonomous systems remain interpretable, governable, and aligned with human values.


Model of human cognition

Yonggang, Wu

arXiv.org Artificial Intelligence

Recently, there has been immense development in the field of artificial intelligence (AI) and computational neuroscience. Numerous architectures and models have been implemented in artificial systems to challenge human intelligence, especially with the release of increasingly proficient large language models (LLMs). However, despite advancements in LLMs, artificial systems still fall short of matching the human capacity for generalisation across diverse tasks and environments, making it an overstatement to label the current generation of LLMs as artificial general intelligence (AGI). We propose that in order to create artificial systems with high generalisation capabilities, one must first examine and understand the fundamentals of human cognition through conceptual models of the brain. This paper introduces a theoretical model of cognition that integrates biological plausibility and functionality, encapsulating the fundamental elements of cognition and accounting for many psychological and behavioural regularities. The model consists of four main modules: the visual processing module, the semantic module, the predictive module, and the executive module. The modules are discussed in chronological order, with each being affiliated with corresponding anatomical regions of the brain. Thereafter, the model is substantiated with real-world examples that reflect its general problem-solving capabilities.


Robot Metacognition: Decision Making with Confidence for Tool Invention

Meera, Ajith Anil, Collis, Poppy, Arbuzova, Polina, Torres, Abián, Kinghorn, Paul F, Sanz, Ricardo, Lanillos, Pablo

arXiv.org Artificial Intelligence

Robots today often miss a key ingredient of truly intelligent behavior: the ability to reflect on their own cognitive processes and decisions. In humans, this self-monitoring or metacognition is crucial for learning, decision making and problem solving. For instance, humans can evaluate how confident they are in performing a task, thus regulating their own behavior and allocating resources appropriately. Taking inspiration from neuroscience, we propose a robot metacognition architecture centered on confidence (a second-order judgment on decisions) and we demonstrate it on the use case of autonomous tool invention. We propose the use of confidence as a metacognitive measure within the robot decision-making scheme. Confidence-informed robots can evaluate the reliability of their decisions, improving their robustness during real-world physical deployment. This form of robotic metacognition emphasizes embodied action monitoring as a means to achieve better informed decisions. We also highlight potential applications and research directions for robot metacognition.
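Confidence as a second-order judgment that gates behavior can be illustrated with a small sketch. The confidence measure below (the margin between the best and second-best action scores) and all names are illustrative assumptions; the paper's actual architecture and measure may differ:

```python
def decide_with_confidence(scores: dict[str, float], threshold: float = 0.6):
    """Pick an action and attach a second-order confidence judgment.

    Confidence here is a simple proxy: the margin between the best and
    second-best action scores. A first-order decision picks the action;
    the second-order judgment decides whether to trust that decision.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    (best_action, best_score), (_, second_score) = ranked[0], ranked[1]
    confidence = best_score - second_score
    if confidence < threshold:
        # Low confidence: regulate behavior instead of committing
        return ("gather-more-evidence", confidence)
    return (best_action, confidence)

# Toy tool-choice example: the margin is large, so the robot commits
action, conf = decide_with_confidence({"use-stick": 0.9, "use-hook": 0.2})
print(action)  # → use-stick
```

The key property is that the same scores can produce different behavior depending on the confidence gate: a narrow margin triggers information gathering rather than action, which is the kind of self-regulation the abstract attributes to confidence-informed robots.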


Decision-Making Amid Information-Based Threats in Sociotechnical Systems: A Review

Allred, Aaron R., Richardson, Erin E., Bostrom, Sarah R., Crum, James, Spencer, Cara, Tossell, Chad, Niemeyer, Richard E., Hirshfield, Leanne, Hayman, Allison P. A.

arXiv.org Artificial Intelligence

Technological systems increasingly mediate human information exchange, spanning interactions among humans as well as between humans and artificial agents. The unprecedented scale and reliance on information disseminated through these systems substantially expand the scope of information-based influence that can both enable and undermine sound decision-making. Consequently, understanding and protecting decision-making today faces growing challenges, as individuals and organizations must navigate evolving opportunities and information-based threats across varied domains and information environments. While these risks are widely recognized, research remains fragmented: work evaluating information-based threat phenomena has progressed largely in isolation from foundational studies of human information processing. In this review, we synthesize insights from both domains to identify shared cognitive mechanisms that mediate vulnerability to information-based threats and shape behavioral outcomes. Finally, we outline directions for future research aimed at integrating these perspectives, emphasizing the importance of such integration for mitigating human vulnerabilities and aligning human-machine representations.


Autonomous Underwater Cognitive System for Adaptive Navigation: A SLAM-Integrated Cognitive Architecture

Jayarathne, K. A. I. N, Rathnayaka, R. M. N. M., Peiris, D. P. S. S.

arXiv.org Artificial Intelligence

Deep-sea exploration faces critical challenges including disorientation, communication loss, and navigational failures in hostile underwater environments. This paper presents an Autonomous Underwater Cognitive System (AUCS) that integrates Simultaneous Localization and Mapping (SLAM) with a Soar-based cognitive architecture to enable adaptive navigation under dynamic oceanic conditions. The system combines multi-sensor fusion (SONAR, LiDAR, IMU, DVL) with cognitive reasoning capabilities including perception, attention, planning, and learning. Unlike conventional reactive SLAM systems, AUCS incorporates semantic understanding, adaptive sensor management, and memory-based learning to distinguish between dynamic and static objects, thus reducing false loop closures and improving long-term map consistency. This work addresses critical safety limitations observed in previous deep-sea missions and establishes a foundation for next-generation cognitive submersible systems.

